On Sunday, January 12, 2020 at 7:33:40 PM UTC-5, Phil Hobbs wrote:
> On 2020-01-12 19:13, jjhu...@gmail.com wrote:
> > On Sunday, January 12, 2020 at 5:55:08 PM UTC-5, Phil Hobbs wrote:
> >> On 2020-01-12 17:38, jjhu...@gmail.com wrote:
> >>> On Sunday, January 12, 2020 at 3:32:06 PM UTC-5,
> >>> DecadentLinux...@decadence..org wrote:
> >>>> Phil Hobbs <pcdhSpamM...@electrooptical.net> wrote in
> >>>> news:fb4888b5-e96f-1145...@electrooptical.net:
> >>>>
> >>>>> Back in my one foray into big-system design, we design
> >>>>> engineers were always getting in the systems guys' faces
> >>>>> about various pieces of stupidity in the specs. It was all
> >>>>> pretty good-natured, and we wound up with the pain and
> >>>>> suffering distributed about equally.
> >>>>>
> >>>>>
> >>>>
> >>>> That is how men get work done... even 'the programmers'. Very
> >>>> well said, there.
> >>>>
> >>>> That is like the old dig on 'the hourly help'.
> >>>>
> >>>> Some programmers are very smart. Others not so much.
> >>>>
> >>>> I guess choosing to go into it is not such a smart move so
> >>>> they take a hit from the start. :-)
> >>>
> >>
> >>> If that is how men get work done, then they are not using
> >>> software and system engineering techniques developed in the last
> >>> 15-20 years, and their results are *still* subject to the same
> >>> types of errors. I do research and teach in this area. A number
> >>> of studies, and one in particular, cite up to 70% of software
> >>> faults as being introduced on the LHS of the 'V' development
> >>> model. (Other software design lifecycle models have similar fault
> >>> percentages.) A major issue is that most of these errors are
> >>> observed at integration time (software+software,
> >>> software+hardware). The cost of defect removal along the RHS of
> >>> the 'V' development model is anywhere from 50-200X the removal
> >>> cost along the LHS of the 'V'. (No wonder systems cost so
> >>> much.)
> >>
> >> Nice rant. Could you tell us more about the 'V' model?
> >>
> >>> The talk about errors in this thread is very high level, and
> >>> most people have the mindset that they are thinking about errors at
> >>> the unit test level. There are numerous techniques developed to
> >>> identify and fix fault types throughout the entire development
> >>> lifecycle, but regrettably a lot of them are not employed.
> >>
> >> What sorts of techniques do you use to find problems in the
> >> specifications?
> >>> Actually a large percentage of the errors are discovered and
> >>> fixed at that level. Errors of the type: units mismatch, variable
> >>> type mismatch, and a slew of concurrency issues aren't discovered
> >>> till integration time. Usually, at that point, there is a 'rush'
> >>> to get the system fielded. The horror stories and lessons learned
> >>> are well documented.
> >>
> >> Yup. Leaving too much stuff for the system integration step is a
> >> very very well-known way to fail.
> >>
> >>> IDK what exactly happened (yet) with the Boeing MAX development.
> >>> I do have info from some sources that cannot be disclosed at
> >>> this point. From what I've read, there were major mistakes made
> >>> from inception through implementation and integration. My
> >>> personal view is that one should almost never (never?) place the
> >>> task on software to correct an inherently unstable airframe
> >>> design - it is putting a bandaid on the source of the problem.
> >>
> >> It's commonly done, though, isn't it? I remember reading Ben
> >> Rich's book on the Skunk Works, where he says that the F-117's very
> >> squirrelly handling characteristics were fixed up in software to
> >> make it a beautiful plane to fly. That was about 1980.
> >>
> >>> Another major issue is that the hazard analysis and fault
> >>> tolerance approach was not done at the system level (the
> >>> redundancy approach was pitiful, as was the *logic* used in
> >>> implementing it, both conceptually and in practice).
> >>
> >>> I do think that the better software engineers do have a more
> >>> holistic view of the system (hardware knowledge + system
> >>> operational knowledge) which will allow them to ask questions
> >>> when things don't 'seem right.' OTOH, the software engineers
> >>> should not go making assumptions about things and coding to those
> >>> assumptions. (It happens more than you think) It is the job of
> >>> the software architect to ensure that any development assumptions
> >>> are captured and specified in the software architecture.
> >>
> >> In real life, though, it's super important to have two-way
> >> communications during development, no? My large-system experience
> >> was all hardware (the first civilian satellite DBS system,
> >> 1981-83), so things were quite a bit simpler than in a large
> >> software-intensive system. I'd expect the need for bottom-up
> >> communication to be greater now rather than less.
> >>
> >>> In studies I have looked at, requirements errors account for
> >>> somewhere between 30-40% of the overall number of
> >>> faults during the design lifecycle, and the 'industry standard'
> >>> approach to dealing with this problem is woefully
> >>> inadequate despite techniques to detect and remove the errors. A
> >>> lot of time is spent doing software requirements tracing as
> >>> opposed to doing verification of requirements. People argue that
> >>> one cannot verify the requirements until the system has been
> >>> built - which is complete BS, but industry is very slow to change.
> >>> We have shown that using software architecture modeling addresses
> >>> a large percentage of system-level problems early in the design
> >>> life cycle. We are trying to convince industry. Until change
> >>> happens, the parade of failures like the MAX will continue.
> >>
> >> I'd love to hear more about that.
> >>
> >> Cheers
> >>
> >> Phil Hobbs
> >>
>
> > Sorry - I get a bit carried away on this topic... For requirements
> > engineering verification one can google: formal and semi-formal
> > requirements specification languages. RDAL and ReqSpec are ones I am
> > familiar with. Techniques to verify requirements include model
> > checking (google 'model checking'), based on formal logics like
> > LTL (Linear Temporal Logic) and CTL (Computation Tree Logic). One
> > constructs state models from requirements and uses model checking
> > engines to analyze the structures. Model checking was actually
> > used to verify a bus protocol in the early 90s and found *lots* of
> > problems with the spec... that caused industry to 'wake up'. There
> > are others that work on code, but these are very much research-y
> > efforts.
> >
> > Simulink has a model checker in its toolboxes (based on Promela);
> > it is quite good.
> >
> > We advocate using architecture description languages (ADLs), i.e.,
> > formal modeling notations to model different views of the
> > architecture and capture properties of the system from which
> > analysis can be done (e.g. signal latency, variable format and
> > property consistency, processor utilization, bandwidth capacity,
> > hazard analysis, etc.). The one that I had a hand in designing is
> > the Architecture Analysis and Design Language (AADL). It is an SAE
> > Aerospace standard. If things turn out well, it will be used on the
> > next generation of helicopters for the Army. We have been piloting
> > its use on real systems for the last 2-3 years, and on pilot
> > studies for the last 10 years. For system hazard analysis, google
> > STPA (System-Theoretic Process Analysis), spearheaded by Nancy
> > Leveson of MIT (she has consulted for Boeing).
> >
> > Yes, I've seen software applied to fix hw problems but assessing the
> > risk is complicated. The results can be catastrophic. Ok, off my
> > rant....
> >
>
> Thanks. I feel a bit like I'm drinking from a fire hose, which is
> always my preferred way of learning stuff.... I'd be super interested
> in an accessible presentation of methods for sanity-checking high-level
> system requirements.
>
> Being constitutionally lazy, I'm a huge fan of ways to work smarter
> rather than harder. ;)
>
> Cheers
>
> Phil Hobbs
>
>
> --
> Dr Philip C D Hobbs
> Principal Consultant
> ElectroOptical Innovations LLC / Hobbs ElectroOptics
> Optics, Electro-optics, Photonics, Analog Electronics
> Briarcliff Manor NY 10510
>
>
> http://electrooptical.net
>
> http://hobbs-eo.com
Phil, et al....
I meant to post some information wrt your inquiry about techniques to express and analyze requirements and about model checking but got OBE.
I found this slide set that rather concisely lays out the problem & approaches to express requirements.
https://www.iaria.org/conferences2018/filesICSEA18/RadekKoci_RequirementsModellingAndSoftwareSystemsImplementation.pdf
When I read through English-text requirements, I tend to do two things simultaneously: map them to some abstract component in the system hierarchy (because the written requirements are usually spread all over the system), and re-express them in a semi-formal or formal notation (usually semi-formal, such as statecharts, ER diagrams, sequence diagrams, interaction diagrams). This gives me an idea whether things are collectively coherent. I look for conflicts and omissions primarily.
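As a toy illustration of that mapping step (all requirement and component names here are invented, not from any real system or tool), the "look for omissions" part can be mechanized as a simple coverage check over the requirement-to-component allocation:

```python
# Toy requirements-coverage check: given a mapping from requirement IDs
# to the architecture components they are allocated to, report
# requirements allocated to nothing and components no requirement touches.
# Names are illustrative only.

def coverage_gaps(mapping, components):
    """mapping: requirement id -> set of component names it touches."""
    unallocated = sorted(r for r, comps in mapping.items() if not comps)
    touched = set().union(*mapping.values()) if mapping else set()
    orphans = sorted(set(components) - touched)
    return unallocated, orphans

mapping = {
    "REQ-001": {"sensor_mgr"},
    "REQ-002": {"sensor_mgr", "flight_ctrl"},
    "REQ-003": set(),          # written down, but allocated to nothing
}
components = ["sensor_mgr", "flight_ctrl", "logger"]

unallocated, orphans = coverage_gaps(mapping, components)
print("unallocated requirements:", unallocated)  # ['REQ-003']
print("components with no requirements:", orphans)  # ['logger']
```

Real traceability tooling does far more (bidirectional links, change impact), but even this trivial check catches the two omission patterns mentioned above.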
I then take my understanding of the components and their interactions and construct an AADL model to understand who talks to whom and what data is communicated, then map requirements to the components and do analysis on the model (signal flows and latency are usually the top properties). I then try to tease out what the fault tolerance approach is and model that, keeping in mind error types, and look for error flow, mitigation approaches, etc.
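The flow-latency part of that analysis can be sketched in miniature. This is only a sum of per-component worst-case latencies along one end-to-end path (component names and numbers are invented); a real AADL flow-latency analysis also accounts for sampling, queuing, and scheduling effects:

```python
# Toy end-to-end latency roll-up along a signal flow, in the spirit of
# flow-latency analysis over an architecture model. All names and
# latency figures below are invented for illustration.

# Worst-case processing latency per component, in milliseconds.
latency_ms = {
    "aoa_sensor": 5.0,
    "filter": 2.0,
    "control_law": 8.0,
    "actuator_cmd": 4.0,
}

def flow_latency(path, latencies):
    """Sum worst-case latencies along an end-to-end flow path."""
    return sum(latencies[c] for c in path)

path = ["aoa_sensor", "filter", "control_law", "actuator_cmd"]
budget_ms = 25.0

total = flow_latency(path, latency_ms)
print(f"end-to-end: {total} ms, within budget: {total <= budget_ms}")
```

The value of doing this on a model is that the budget check runs every time a component's latency property changes, instead of being discovered at integration time.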
If there is an area that is really confusing, I'll construct state models and use model checking. Some useful tools are NuSMV,
http://nusmv.fbk.eu/
and SPIN
http://spinroot.com/spin/whatispin.html
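To give a feel for what these tools do, here is a miniature explicit-state safety checker: enumerate reachable states by BFS and return a trace to any state violating an invariant. The "protocol" is a deliberately broken two-process mutex with no entry guard (my own toy example; NuSMV and SPIN do this at vastly larger scale, with symbolic techniques and temporal logic):

```python
# Minimal explicit-state safety model checking: breadth-first search
# over the reachable states of a transition system, returning a
# counterexample trace if the invariant can be violated.

from collections import deque

def successors(state):
    """Interleaved steps of two processes: idle -> trying -> critical -> idle.
    Deliberately broken: nothing guards entry to 'critical'."""
    step = {"idle": "trying", "trying": "critical", "critical": "idle"}
    for i in range(2):
        nxt = list(state)
        nxt[i] = step[state[i]]
        yield tuple(nxt)

def check_invariant(init, succ, invariant):
    """Return None if the invariant holds on all reachable states,
    else a shortest trace from init to a violating state."""
    parent = {init: None}
    queue = deque([init])
    while queue:
        s = queue.popleft()
        if not invariant(s):
            trace = []
            while s is not None:          # walk parents back to init
                trace.append(s)
                s = parent[s]
            return trace[::-1]
        for t in succ(s):
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None

mutual_exclusion = lambda s: s != ("critical", "critical")
trace = check_invariant(("idle", "idle"), successors, mutual_exclusion)
print(trace)  # a shortest interleaving ending in ('critical', 'critical')
```

The counterexample trace is exactly what makes model checking useful in practice: it is not just "the property fails" but a concrete interleaving an engineer can replay against the spec.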
As a note, using model checking can be a challenge for the engineer. They have not seen anything like this in undergrad or grad school unless they lean more toward computer science. We looked at this issue 20 years ago and produced a number of reports that tried to package the approach as a tool kit, identifying types of analysis and patterns that could be recognized and more easily applied by an engineer unfamiliar with the area. They are somewhere on the SEI website.
Speaking of model checking, below are two of the more often cited model checking approaches and successful applications. There is little 'how to'; they are more 'here is the problem and how we solved it.' (Details left to the reader ;) )
http://www.cs.cmu.edu/~emc/papers/Conference%20Papers/95_verification_fbc_protocol.pdf
https://link.springer.com/chapter/10.1007/3-540-60973-3_102
There is a report from NASA some years ago that gave some excellent guidelines on writing requirements - I can't locate it at the moment, but this website has some good guidelines, many of which were in the NASA report.
https://qracorp.com/write-clear-requirements-document/
(It still amazes me that even now, requirements docs that I've seen don't do half of these things....)
Hope this helps
J